Optimistic No-regret Algorithms for Discrete Caching
Authors
Abstract
We take a systematic look at the problem of storing whole files in a cache with limited capacity in the context of optimistic learning, where the caching policy has access to a prediction oracle (provided by, e.g., a neural network). The successive file requests are assumed to be generated by an adversary, and no assumption is made on the accuracy of the oracle. In this setting, we provide a universal lower bound for prediction-assisted online caching and proceed to design a suite of policies with a range of performance-complexity trade-offs. All proposed policies offer sublinear regret bounds commensurate with the accuracy of the oracle. Our results substantially improve upon all recently-proposed online caching policies, which, being unable to exploit the oracle predictions, offer only O(√T) regret. In this pursuit, we design, to the best of our knowledge, the first comprehensive optimistic Follow-the-Perturbed-Leader policy, which generalizes beyond the caching problem. We also study the caching problem with files of different sizes and the bipartite network caching problem. Finally, we evaluate the efficacy of the proposed policies through extensive numerical experiments using real-world traces.
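As a rough illustration of the optimistic Follow-the-Perturbed-Leader idea sketched in the abstract, the toy policy below caches the C files with the highest perturbed, prediction-boosted cumulative request counts. All concrete details here are assumptions for illustration only, not the paper's actual construction: Gaussian perturbations with a schedule η_t = √t, a synthetic uniform request trace in place of an adversarial one, and a perfect one-step oracle.

```python
import numpy as np

rng = np.random.default_rng(0)

N, C, T = 20, 5, 500                 # catalog size, cache capacity, horizon (toy values)

counts = np.zeros(N)                 # cumulative request counts per file
requests = rng.integers(0, N, size=T)  # synthetic trace, standing in for adversarial requests
hits = 0

for t in range(T):
    prediction = np.zeros(N)
    prediction[requests[t]] = 1.0    # oracle's guess of the next request
                                     # (perfect here, purely for illustration)
    eta = np.sqrt(t + 1)             # illustrative perturbation schedule
    gamma = rng.normal(size=N)       # fresh Gaussian perturbation each round
    score = counts + prediction + eta * gamma
    cache = np.argpartition(score, -C)[-C:]  # cache the C highest-scoring files

    if requests[t] in cache:
        hits += 1
    counts[requests[t]] += 1         # observe the request, update counts

print(f"hit ratio: {hits / T:.2f}")
```

With an accurate oracle, the prediction term steers the perturbed leader toward the next request before it arrives; with a useless oracle, the policy degrades gracefully to plain Follow-the-Perturbed-Leader on the observed counts.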
Related Resources
No-regret Algorithms for Online Convex Programs
Online convex programming has recently emerged as a powerful primitive for designing machine learning algorithms. For example, OCP can be used for learning a linear classifier, dynamically rebalancing a binary search tree, finding the shortest path in a graph with unknown edge lengths, solving a structured classification problem, or finding a good strategy in an extensive-form game. Several res...
No-regret Algorithms for Structured Prediction Problems
No-regret algorithms are a popular class of online learning rules. Unfortunately, most no-regret algorithms assume that the set Y of allowable hypotheses is small and discrete. We consider instead prediction problems where Y has internal structure: Y might be the set of strategies in a game like poker, the set of paths in a graph, or the set of configurations of a data structure like a rebalanc...
No-Regret Algorithms for Unconstrained Online Convex Optimization
Some of the most compelling applications of online convex optimization, including online prediction and classification, are unconstrained: the natural feasible set is Rⁿ. Existing algorithms fail to achieve sub-linear regret in this setting unless constraints on the comparator point x̊ are known in advance. We present algorithms that, without such prior knowledge, offer near-optimal regret bounds...
No-Regret Algorithms for Heavy-Tailed Linear Bandits
We analyze the problem of linear bandits under heavy-tailed noise. Most of the work on linear bandits has been based on the assumption of bounded or sub-Gaussian noise. This assumption, however, is often violated in common scenarios such as financial markets. We present two algorithms to tackle this problem: one based on dynamic truncation and one based on a median-of-means estimator. We show tha...
Journal
Journal title: Proceedings of the ACM on Measurement and Analysis of Computing Systems
Year: 2022
ISSN: 2476-1249
DOI: https://doi.org/10.1145/3570608